
    Efficient Non-Malleable Codes and Key Derivation for Poly-Size Tampering Circuits

    Non-malleable codes, defined by Dziembowski, Pietrzak, and Wichs (ICS '10), provide roughly the following guarantee: if a codeword c encoding some message x is tampered to c' = f(c) such that c' ≠ c, then the tampered message x' contained in c' reveals no information about x. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks. One cannot have an efficient non-malleable code that protects against all efficient tampering functions f. However, in this paper we show 'the next best thing': for any polynomial bound s given a priori, there is an efficient non-malleable code that protects against all tampering functions f computable by a circuit of size s. More generally, for any family of tampering functions F of size |F| ≀ 2^s, there is an efficient non-malleable code that protects against all f in F. The rate of our codes, defined as the ratio of message to codeword size, approaches 1. Our results are information-theoretic and our main proof technique relies on a careful probabilistic-method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the 'common reference string' model. We also introduce a new notion of non-malleable key derivation, which uses randomness x to derive a secret key y = h(x) in such a way that, even if x is tampered to a different value x' = f(x), the derived key y' = h(x') does not reveal any information about y. Our results for non-malleable key derivation are analogous to those for non-malleable codes. As a useful tool in our analysis, we rely on the notion of 'leakage-resilient storage' of Davì, Dziembowski, and Venturi (SCN '10), and, as a result of independent interest, we also significantly improve on the parameters of such schemes.
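
    To make the tampering experiment above concrete, here is a minimal sketch in Python; encode/decode stand in for an arbitrary (Enc, Dec) pair, and the toy checksum instantiation below it is purely illustrative (it is not non-malleable and is not the paper's construction):

        def tamper_experiment(encode, decode, x, f):
            """One run of the tampering experiment: encode, tamper, decode."""
            c = encode(x)           # honest codeword for message x
            c_tampered = f(c)       # adversarial tampering function f
            if c_tampered == c:     # unchanged codewords decode to x itself
                return "same"
            # Non-malleability: this value must reveal nothing about x
            # (it may be a rejection symbol, modeled here as None).
            return decode(c_tampered)

        # Toy instantiation (checksum byte), just to exercise the experiment:
        enc = lambda x: x + bytes([sum(x) % 256])
        dec = lambda c: c[:-1] if sum(c[:-1]) % 256 == c[-1] else None
        print(tamper_experiment(enc, dec, b"msg", lambda c: b"X" + c[1:]))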

    Leveled Fully Homomorphic Signatures from Standard Lattices

    In a homomorphic signature scheme, a user Alice signs some large data x using her secret signing key and stores the signed data on a server. The server can then run some computation y = g(x) on the signed data and homomorphically produce a short signature σ. Anybody can verify the signature using Alice's public verification key and become convinced that y is the correct output of the computation g over Alice's data, without needing to have the underlying data itself. In this work, we construct the first leveled fully homomorphic signature schemes that can evaluate arbitrary circuits over signed data, where only the maximal depth d of the circuit needs to be fixed a priori. The size of the evaluated signature grows polynomially in d, but is otherwise independent of the circuit size or the data size. Our solutions are based on the hardness of the small integer solution (SIS) problem, which is in turn implied by the worst-case hardness of problems in standard lattices. We get a scheme in the standard model, albeit with large public parameters whose size must exceed the total size of all signed data. In the random-oracle model, we get a scheme with short public parameters. These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature scheme due to Boneh and Freeman (Eurocrypt '11). As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDF). We show how to construct homomorphic signatures using HTDFs as a black box. We construct HTDFs based on the SIS problem by relying on a recent technique developed by Boneh et al. (Eurocrypt '14) in the context of attribute-based encryption.
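
    The workflow described above suggests a four-algorithm interface. The following Python sketch fixes names for illustration only (keygen/sign/eval/verify are hypothetical labels, not the paper's notation); note that the depth bound d must be chosen at key generation, matching the leveled guarantee:

        class LeveledHomomorphicSignature:
            """Interface sketch for the workflow above; not an actual scheme."""
            def keygen(self, d):
                """Fix the maximal circuit depth d a priori; output (sk, pk)."""
                raise NotImplementedError
            def sign(self, sk, x):
                """Alice signs her data x and uploads (x, sigma_x)."""
                raise NotImplementedError
            def eval(self, pk, g, x, sigma_x):
                """Server: derive a short signature on y = g(x); its size
                grows poly(d) but is independent of |g| and |x|."""
                raise NotImplementedError
            def verify(self, pk, g, y, sigma_y):
                """Anyone: check that y really is g(x) for Alice's signed x,
                without seeing x itself."""
                raise NotImplementedError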

    Small-Box Cryptography

    One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building block ciphers from small components, such as cryptographic S-boxes, and then argue why iterating such small components for sufficiently many rounds would yield a secure construction. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond 2^{-n}, where n is the size of the corresponding component. As a result, prior provably secure approaches - which we call "big-box cryptography" - always made n larger than the security parameter, which led to several problems: (a) the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, which happen when instantiating such "big-boxes", were completely abstracted out; (b) the theoretically predicted number of rounds for the security of this approach was always dramatically smaller than in reality, where the "big-box" building block could not be made as ideal as required by the proof. For example, Even-Mansour (and, more generally, key-alternating) ciphers completely ignored the substitution-permutation network (SPN) paradigm, which is at the heart of most real-world implementations of such ciphers. In this work, we introduce a novel paradigm for justifying the security of existing block ciphers, which we call small-box cryptography. Unlike the "big-box" paradigm, it allows one to go much deeper inside the existing block cipher constructions, by only idealizing a small (and, hence, realistic!) building block of very small size n, such as an 8-to-32-bit S-box. It then introduces a clean and rigorous mixture of proofs and hardness conjectures which allow one to lift traditional, and seemingly meaningless, "at most 2^{-n}" security proofs for reduced-round idealized variants of the existing block ciphers into meaningful, full-round security justifications of the actual ciphers used in the real world. We then apply our framework to the analysis of SPN ciphers (e.g., generalizations of AES), getting quite reasonable and plausible concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we managed to find a direct "big-box"-style security justification, under the well-studied and widely believed eXact Linear Parity with Noise (XLPN) assumption. Overall, we hope that our work will initiate many follow-up results in the area of small-box cryptography.
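
    As a toy illustration of the "small-box" viewpoint (not the paper's concrete proposal), the Python sketch below iterates a single idealized 8-bit S-box inside one substitution-permutation round; the byte rotation is a placeholder for a real diffusion layer:

        import random

        rng = random.Random(0)
        SBOX = list(range(256))
        rng.shuffle(SBOX)          # idealized small component: an 8-bit S-box

        def spn_round(state: bytes, round_key: bytes) -> bytes:
            """One toy SPN round: S-box layer, permutation layer, key mixing."""
            substituted = [SBOX[b] for b in state]          # small boxes
            permuted = substituted[1:] + substituted[:1]    # toy diffusion
            return bytes(s ^ k for s, k in zip(permuted, round_key))

        state = spn_round(b"16-byte block!!!", bytes(range(16)))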

    Universal Amplification of KDM Security: From 1-Key Circular to Multi-Key KDM

    An encryption scheme is Key Dependent Message (KDM) secure if it is safe to encrypt messages that can arbitrarily depend on the secret keys themselves. In this work, we show how to upgrade essentially the weakest form of KDM security into the strongest one. In particular, we assume the existence of a symmetric-key bit-encryption scheme that is circular-secure in the 1-key setting, meaning that it maintains security even if one can encrypt individual bits of a single secret key under itself. We also rely on a standard CPA-secure public-key encryption. We construct a public-key encryption scheme that is KDM secure for general functions (of a-priori bounded circuit size) in the multi-key setting, meaning that it maintains security even if one can encrypt arbitrary functions of arbitrarily many secret keys under each of the public keys. As a special case, the latter guarantees security in the presence of arbitrary-length key cycles. Prior work already showed how to amplify n-key circular to n-key KDM security for general functions. Therefore, the main novelty of our work is to upgrade from 1-key to n-key security for arbitrary n. As an independently interesting feature of our result, our construction does not need to know the actual specification of the underlying 1-key circular-secure scheme, and we only rely on the existence of some such scheme in the proof of security. In particular, we present a universal construction of a multi-key KDM-secure encryption that is secure as long as some 1-key circular-secure scheme exists. While this feature is similar in spirit to Levin's universal construction of one-way functions, the way we achieve it is quite different technically, and does not come with the same "galactic inefficiency".
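
    For concreteness, here is a minimal Python sketch of the two oracles involved: the 1-key circular oracle that is assumed (encrypting bits of sk under sk itself) and the multi-key KDM oracle that is achieved (encrypting a function of all secret keys under any public key); enc_bit and enc are hypothetical stand-ins for the underlying schemes:

        def circular_oracle(enc_bit, sk_bits):
            """Assumed: 1-key circular security. Query i returns Enc_sk(sk_i)."""
            return lambda i: enc_bit(sk_bits, sk_bits[i])

        def kdm_oracle(enc, public_keys, secret_keys, j, f):
            """Achieved: multi-key KDM. Encrypt f(sk_1, ..., sk_n) under pk_j,
            for any f of a-priori bounded circuit size."""
            return enc(public_keys[j], f(*secret_keys))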

    Tamper Detection and Continuous Non-Malleable Codes

    We consider a public and keyless code (Enc, Dec) which is used to encode a message m and derive a codeword c = Enc(m). The codeword can be adversarially tampered via a function f ∈ F from some tampering function family F, resulting in a tampered value c' = f(c). We study the different types of security guarantees that can be achieved in this scenario for different families F of tampering attacks. Firstly, we initiate the general study of tamper-detection codes, which must detect that tampering occurred and output Dec(c') = ⊄. We show that such codes exist for any family of functions F over n-bit codewords, as long as |F| < 2^{2^n} is sufficiently smaller than the set of all possible functions, and the functions f ∈ F are further restricted in two ways: (1) they can only have a few fixed points x such that f(x) = x, (2) they must have high entropy of f(x) over a random x. Such codes can also be made efficient when |F| = 2^{poly(n)}. For example, F can be the family of all low-degree polynomials excluding constant and identity polynomials. Such tamper-detection codes generalize the algebraic manipulation detection (AMD) codes of Cramer et al. (EUROCRYPT '08). Next, we revisit non-malleable codes, which were introduced by Dziembowski, Pietrzak and Wichs (ICS '10) and require that Dec(c') either decodes to the original message m, or to some unrelated value (possibly ⊄) that doesn't provide any information about m. We give a modular construction of non-malleable codes by combining tamper-detection codes and leakage-resilient codes. This gives an alternate proof of the existence of non-malleable codes with optimal rate for any family F of size |F| < 2^{2^n}, as well as efficient constructions for families of size |F| = 2^{poly(n)}. Finally, we initiate the general study of continuous non-malleable codes, which provide a non-malleability guarantee against an attacker that can tamper a codeword multiple times. We define several variants of the problem depending on: (I) whether tampering is persistent and each successive attack modifies the codeword that has been modified by previous attacks, or whether tampering is non-persistent and is always applied to the original codeword; (II) whether we can self-destruct and stop the experiment if a tampered codeword is ever detected to be invalid, or whether the attacker can always tamper more. In the case of persistent tampering and self-destruct (the weakest case), we get broad existence results, essentially matching what's known for standard non-malleable codes. In the case of non-persistent tampering and no self-destruct (the strongest case), we must further restrict the tampering functions to have few fixed points and high entropy. The two intermediate cases correspond to requiring only one of the above two restrictions. These results have applications in cryptography to related-key attack (RKA) security and to protecting devices against tampering attacks without requiring state or randomness.
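
    The tamper-detection guarantee above is easy to state operationally: whenever tampering actually changes the codeword, decoding must reject. A small Python harness (illustrative only; encode/decode are placeholders for a real scheme) that checks this empirically, as a sanity test:

        import random

        def check_tamper_detection(encode, decode, messages, family, trials=1000):
            """Empirical check: if f(c) != c then Dec must output bottom (None)."""
            for _ in range(trials):
                m = random.choice(messages)
                f = random.choice(family)
                c = encode(m)
                c2 = f(c)
                if c2 != c and decode(c2) is not None:
                    return False    # undetected tampering: property violated
            return True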

    Obfuscating Conjunctions under Entropic Ring LWE

    We show how to securely obfuscate conjunctions, which are functions f(x_1, ..., x_n) = ∧_{i∈I} y_i where I ⊆ [n] and each literal y_i is either just x_i or ¬x_i, e.g., f(x_1, ..., x_n) = x_1 ∧ ¬x_3 ∧ ¬x_7 ∧ ⋯ ∧ x_{n−1}. Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a non-standard object called cryptographic multilinear maps, our scheme is based on an "entropic" variant of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can be established based on entropic Ring LWE. We view this as a first step towards proving the security of additional multilinear map based constructions, and in particular program obfuscators, under standard assumptions. Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
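
    Functionally, a conjunction is just a pattern match on the subset I of constrained input positions; the point of obfuscation is to hide which positions are constrained and to what values. A minimal Python sketch of the (unobfuscated) functionality, with illustrative names:

        def make_conjunction(literals):
            """literals maps position i to True (literal x_i) or False (literal
            not-x_i); unconstrained positions are wildcards."""
            return lambda x: all(x[i] == want for i, want in literals.items())

        f = make_conjunction({0: True, 2: False, 6: False})  # x_1 ∧ ¬x_3 ∧ ¬x_7
        print(f([1, 0, 0, 1, 1, 0, 0, 1]))                   # True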

    Candidate Obfuscation via Oblivious LWE Sampling

    We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security. Our construction is based on the recent split FHE framework of Brakerski, Döttling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. At a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
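
    To fix intuition for what must be sampled, recall the shape of an LWE sample: a public vector a and b = ⟹a, s⟩ + e (mod q). The toy Python below generates such a sample the ordinary, non-oblivious way, i.e., by knowing s and e; the paper's goal is precisely to produce indistinguishable samples without any party learning s or e (the parameters here are illustrative):

        import random

        def lwe_sample(s, q=3329, noise_bound=2):
            """One LWE sample (a, <a,s> + e mod q), sampled NON-obliviously."""
            rng = random.SystemRandom()
            a = [rng.randrange(q) for _ in s]
            e = rng.randint(-noise_bound, noise_bound)
            b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
            return a, b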

    Is there an Oblivious RAM Lower Bound for Online Reads?

    Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an O(log n) overhead per access, where n is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a "balls and bins" model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond "balls and bins"? The work of Boyle and Naor (ITCS '16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with o(log n) overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO '18) shows that there indeed is an Ω(log n) lower bound for general online ORAM. This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an o(log n) overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
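
    As a baseline for what "overhead" means here, the trivially oblivious read touches every block, so the physical access pattern is independent of the logical index but costs O(n) per access; the schemes discussed above aim for o(log n) read overhead. A minimal Python sketch:

        def trivial_oblivious_read(memory, index):
            """Linear-scan read: the sequence of physical accesses never
            depends on `index`, giving obliviousness at O(n) overhead."""
            value = None
            for i in range(len(memory)):
                block = memory[i]       # every block is touched every time
                if i == index:
                    value = block
            return value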

    Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions

    In this paper, we study succinct computationally sound proofs (arguments) for NP, whose communication complexity is polylogarithmic in the instance and witness sizes. The seminal works of Kilian '92 and Micali '94 show that such arguments can be constructed under standard cryptographic hardness assumptions with four rounds of interaction, and that they can be made non-interactive in the random-oracle model. The latter construction also gives us some evidence that succinct non-interactive arguments (SNARGs) may exist in the standard model with a common reference string (CRS), by replacing the oracle with a sufficiently complicated hash function whose description goes in the CRS. However, we currently do not know of any construction of SNARGs with a formal proof of security under any simple cryptographic assumption. In this work, we give a broad black-box separation result, showing that black-box reductions cannot be used to prove the security of any SNARG construction based on any falsifiable cryptographic assumption. This includes essentially all common assumptions used in cryptography (one-way functions, trapdoor permutations, DDH, RSA, LWE, etc.). More generally, we say that an assumption is falsifiable if it can be modeled as an interactive game between an adversary and an efficient challenger that can efficiently decide if the adversary won the game. This is similar, in spirit, to the notion of falsifiability of Naor '03, and captures the fact that we can efficiently check if an adversarial strategy breaks the assumption. Our separation result also extends to designated-verifier SNARGs, where the verifier needs a trapdoor associated with the CRS to verify arguments, and to slightly succinct SNARGs, whose size is only required to be sublinear in the statement and witness size.
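
    The falsifiability template described above can be phrased as a simple interface: an efficient challenger issues a challenge, receives the adversary's response, and efficiently decides whether the assumption was broken. A hypothetical Python sketch (all names are illustrative, not from the paper):

        def run_falsifiable_game(challenger, adversary):
            """One execution of a falsifiable-assumption game. The key point
            is that `decide` is efficient: a win can be checked efficiently."""
            challenge, secret_state = challenger.setup()
            response = adversary(challenge)
            return challenger.decide(secret_state, response)  # True = broken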

    A counterexample to the chain rule for conditional HILL entropy

    Most entropy notions H(·) like Shannon or min-entropy satisfy a chain rule stating that for random variables X, Z, and A we have H(X|Z,A) ≄ H(X|Z) − |A|. That is, by conditioning on A the entropy of X can decrease by at most the bitlength |A| of A. Such chain rules are known to hold for some computational entropy notions like Yao's and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, for which we give a strong negative answer: we construct joint distributions (X, Z, A), where A is a distribution over a single bit, such that the HILL entropy H^HILL(X|Z) is large but H^HILL(X|Z,A) is basically zero. Our counterexample just makes the minimal assumption that NP ⊈ P/poly. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable. Finally, we show that some more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule, making only a single invocation to the underlying object.
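
    For reference, the chain rule in question and the counterexample's conclusion in display form (a restatement of the text above, not new claims):

        % Chain rule: conditioning on A costs at most |A| bits.
        H(X \mid Z, A) \ge H(X \mid Z) - |A|

        % Counterexample: joint (X, Z, A) with A a single bit, yet
        H^{\mathrm{HILL}}(X \mid Z) \text{ is large}, \qquad
        H^{\mathrm{HILL}}(X \mid Z, A) \approx 0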